Determinacy of Borel games I

I’m trying to understand the famous result of Donald Martin that Borel games are determined, and by that I mean not just understand the proof line by line, but also understand why the lines are as they are. I have found some nice presentations online, in particular these notes by Shehzad Ahmed and this Masters thesis by Ross Bryant, which have been very helpful, not just in presenting the proofs line by line but also, to some extent, in doing things like explaining certain definitions that would otherwise seem a little strange. (Ahmed’s notes are good for a quick overview and Bryant’s thesis is good if you want full details.) Nevertheless, I think one can go further in that direction and speculate about how Martin came up with his proof. I’m actually talking not about his original 1975 proof but about a simpler argument he discovered about ten years later in a paper that, thanks to the generosity of the AMS, I have not been able to look at. (I am on holiday in France, or I suppose I could have trudged over to my departmental library, but for some reason I find I can never bear to do that even if I’m in Cambridge.)

Understanding a proof in that kind of detailed way takes some effort, and usually everybody has to make that kind of effort for themselves. However, that ought not to be the case in the internet age: in this post I’m going to try to write an account of the proof in a way that makes it possible for others to understand it properly without making much effort. Whether I’ll succeed I don’t know. [Added later: I have now written three posts, and expect to finish in one more. I have got far enough to be confident that I will actually manage to write that fourth post. So at least I will end up with a reasonable understanding of the proof, even if I don’t manage to transmit it to anyone else.]

One thing I believe quite strongly is that it is better to be in a position where you can remember the ideas in a way that makes it easy to reconstruct the details than to be in a position where you have all the details in front of you but are less certain of the underlying ideas. So I may skimp on some of the details, but only if I am confident that they really have been reduced to easy exercises.

What is the statement of the theorem?

This takes some time. I need to say what a Borel set is, what a game is, what it means for a game to be determined, and why not all games are determined. I’ll start with the stuff about games.

What is a game?

Let us write X for the set \mathbb{N}^{\mathbb{N}} of all infinite sequences of positive integers. If A is a subset of X, then we can define a two-player game in a simple way as follows. Player I chooses a positive integer n_1, then Player II chooses a positive integer n_2, then Player I chooses a positive integer n_3, and so on. At the end of the game, when they have between them chosen an infinite sequence \mathbf{n}=(n_1,n_2,\dots)\in X, Player I wins if \mathbf{n}\in A and Player II wins if \mathbf{n}\notin A.

To make this definition slightly more intuitive, let me make the simple observation that if you choose a suitable set A, then you have the game of chess. Let me do it in a very crude way. Choose an enumeration of all the possible positions in chess. Call a pair of integers (m,n) a legal move for white if m and n are both associated with chess positions in this enumeration, and there is a valid chess move for white that takes the board from position m to position n. Similarly, we can define a legal move for black. Now let F be the set of all sequences (n_1,n_2,\dots,n_k) with the following properties.

(i) n_1 corresponds to a position that can be reached by white from the starting position.

(ii) If all pairs (n_i,n_{i+1}) are legal moves, then the sequence (n_1,n_2,\dots,n_k) corresponds to a game of chess, played according to the usual rules, that ends in a win for white.

(iii) If not all pairs are legal moves, then k is even and the only exception is the pair (n_{k-1},n_{k}).

Roughly speaking, the sequence either corresponds to a normal game of chess or to a game of chess that continues as normal until black cheats and white refuses to play on. (Note that chess has a rule that if you revisit the same position three times then it’s a draw. Since there are only finitely many possible positions, there is an upper bound on the number of possible moves in any game of chess.)

Now define A to be the set of infinite sequences that have an initial segment in F. If Player I wants to get a sequence to be in A, then she must play (integers corresponding to) legal chess moves for white until either she wins the game of chess or Player II cheats — which we count as a win for Player I. Similarly, if Player II wants to get a sequence not to be in A, then he must play (integers corresponding to) legal chess moves and must win the corresponding game of chess.

I realize after writing all that that there are of course draws in chess. We can model that by having three sets: a win for Player I, a draw, and a win for Player II. However, in the discussion of abstract games, it is usual to have just a set and its complement, so that one player always wins.

A quick remark about pronouns: it is almost impossible to avoid the he/she question when describing a run of a game, so I’m not going to try. However, since there are two players I’ve decided to make Player I “she” and Player II “he”, which seems rather satisfactory.

Strategies and winning strategies.

A key definition is that of a strategy. Roughly speaking, this means a way of deciding what to do in any given situation. We can formalize that very easily: a strategy for Player I is a function \sigma that maps sequences (n_1,n_2,\dots,n_{2k}) to positive integers n. (We allow k to be zero, so the sequences may be empty.) Player I plays according to the strategy \sigma if at the end of the game we have a sequence (n_1,n_2,\dots) such that n_{2k+1}=\sigma(n_1,\dots,n_{2k}) for every k. In other words, Player I uses the function \sigma to decide what positive integer to play, given the sequence so far. A strategy for Player II is defined similarly, but now the function is defined on odd-length sequences. Note that if \sigma is a strategy for Player I and \tau is a strategy for Player II, then the two strategies define a unique infinite sequence — the sequence that results if Player I plays with strategy \sigma and Player II plays with strategy \tau. This sequence is denoted by \sigma\circ\tau.
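
Spelling out that last remark: \sigma\circ\tau is the sequence \mathbf{n} obtained by letting the two strategies take turns, that is, the unique sequence satisfying

n_{2k+1}=\sigma(n_1,\dots,n_{2k})\quad\text{and}\quad n_{2k+2}=\tau(n_1,\dots,n_{2k+1})\quad\text{for every }k\geq 0.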

A strategy \sigma is a winning strategy for Player I if \sigma\circ\tau\in A for every strategy \tau of Player II. Equivalently, \sigma is a winning strategy for Player I if every sequence (n_1,n_2,\dots) that satisfies the condition n_{2k+1}=\sigma(n_1,\dots,n_{2k}) for every k belongs to A. Similarly, we can define a winning strategy for Player II.

A game is determined if one or other player has a winning strategy.

Why isn’t it obvious that all games are determined?

Most people, when they meet this definition for the first time, have a strong intuition that every game must be determined. If Player I does not have a winning strategy, then trivially, or so it seems, Player II has a way of defeating Player I — that is, a winning strategy. And yet, if you believe in the axiom of choice, the conclusion of this argument is false. So at the very least it can’t be trivially true that every game is determined. So what is wrong with the argument I’ve just given?

To answer that, we need to try to understand as precisely as possible what the argument actually is that underlies our feeling that one or other of the two players must have a winning strategy. Here is one possibility. Let’s suppose that Player I does not have a winning strategy. Then we can define a strategy for Player II as follows. After Player I’s first move, there must be a move for Player II that results in a position from which Player I still doesn’t have a winning strategy, or else Player I trivially would have a winning strategy. (This argument is actually correct, so I make no apology for using the word “trivially”.) But then this observation can be repeated: whatever Player I does next, there must be a move for Player II that results in a position from which Player I does not have a winning strategy. In general, Player II’s strategy can be defined as follows: repeatedly play the least positive integer n that leads to a position from which Player I does not have a winning strategy.
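
To spell out the observation that gets repeated: if Player I has no winning strategy from a position (n_1,\dots,n_{2k}), then

\forall n_{2k+1}\ \exists n_{2k+2}\quad\text{Player I has no winning strategy from }(n_1,\dots,n_{2k+1},n_{2k+2}),

since if some move n_{2k+1} had the property that every reply n_{2k+2} led to a position from which Player I had a winning strategy, then playing n_{2k+1} and thereafter following the appropriate strategy would already have been a winning strategy for Player I from (n_1,\dots,n_{2k}).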

If Player II uses that strategy, then at no stage in the game does Player I have a winning strategy. But does that imply that Player I loses the game? Not necessarily. For example, suppose Player I’s objective is to create the all 1s sequence. Then if Player II decides to cooperate and they both play nothing but 1s, there is no stage at which Player I has a winning strategy, and yet Player I ends up winning.

Open games

What the proof does show is that a certain important class of games is determined: the class of open games. Informally, an open game is one with the property that if Player I wins, then there must be a finite stage by which she has already won, in the sense that all extensions of the sequence reached so far are in A.

A simple but not too simple example of an open game is the set of sequences (n_1,n_2,\dots) with at least n_2 1s. Player I can win this game easily, by simply choosing 1 every move. Moreover, if Player I wins, then there will be some stage at which at least n_2 1s have been chosen, and after that it doesn’t make any difference what anyone plays. Note, however, that Player II can postpone this moment for as long as he likes, by choosing n_2 sufficiently large. Thus, while a win for Player I must happen by some finite stage, there isn’t necessarily a bound on how long it takes to reach that stage.

Open games are called open games because the winning sets are open in the product topology on \mathbb{N}^{\mathbb{N}}, where \mathbb{N} itself has the discrete topology. The basic open sets in this topology can be described as follows. Let s=(n_1,\dots,n_k) be a finite sequence. Then we can define an open set X_s by taking all sequences with s as an initial segment. These are the basic open sets, so an open set is a union of sets of the form X_s. If A is an open set and \mathbf{n} is an infinite sequence that belongs to A, then there must be a finite sequence s such that \mathbf{n}\in X_s and X_s\subset A. That is, there must be some initial segment s of \mathbf{n} such that every sequence that starts with s belongs to A.
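
In symbols, if s=(n_1,\dots,n_k), then

X_s=\{\mathbf{m}\in X:m_1=n_1,\ m_2=n_2,\ \dots,\ m_k=n_k\},

and the winning set of the earlier example (sequences with at least n_2 1s) is the union of the sets X_s over all finite sequences s of length at least 2 that contain at least s_2 terms equal to 1, which is why that game is open.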

Suppose that A is an open game, and that Player I does not have a winning strategy. Then consider what the earlier argument tells us. It allows Player II to play without ever reaching a stage where Player I has a winning strategy. In particular, Player II can play in such a way that it is never the case that every continuation of the sequence so far belongs to A (in which case all possible strategies for Player I would be winning strategies). But if A is open, this means that the sequence reached at the end of the game does not belong to A.

Thus, one way of answering the question of why it isn’t obvious that all games are determined is to say that the obvious “proof” that springs to mind shows instead the weaker statement that open games are determined, since these games, by their very definition, have the property that if Player II can manage not to lose at any finite stage, then he wins. The result that open games are determined is due to Gale and Stewart in 1953. The proof is easy, but the paper of Gale and Stewart was also the one where infinite games of this kind were first introduced and where the question of Borel determinacy was first posed.

If you still feel funny about this statement, then a shorter argument against the obviousness may be helpful. It’s that the statement that Player I has a winning strategy for A can be written as follows: there exists a strategy \sigma for Player I such that for every strategy \tau for Player II the sequence \sigma\circ\tau\in A. If this is false, then we can conclude that for every \sigma there exists \tau such that \sigma\circ\tau\notin A. But for Player II to have a winning strategy we need something potentially far stronger: that \tau can be chosen independently of \sigma. It turns out that for some games, declaring your strategy in advance puts you at a huge disadvantage (just as it does if you want to play the who-can-name-the-larger-number game).
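
In this notation, the determinacy of A is the implication

\neg\,\exists\sigma\ \forall\tau\ (\sigma\circ\tau\in A)\ \Longrightarrow\ \exists\tau\ \forall\sigma\ (\sigma\circ\tau\notin A),

and the point of the previous paragraph is that the hypothesis on the left is equivalent only to \forall\sigma\ \exists\tau\ (\sigma\circ\tau\notin A), in which \tau is allowed to depend on \sigma, whereas the conclusion requires a single \tau that works for every \sigma.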

How to build a nondetermined game

If you give yourself the luxury of the axiom of choice, then showing that there are nondetermined games is straightforward. Note first that the set of possible strategies has the same cardinality as that of the continuum, since a strategy is a function from a countable set (the finite sequences of positive integers of even length, or the finite sequences of positive integers of odd length) to a countable set (the positive integers). Therefore, we can well-order the set of all strategies in such a way that each strategy has fewer than continuum-many predecessors.

We now build a set A in such a way that no strategy is a winning strategy. Let us call a sequence \mathbf{n} consistent with a strategy \sigma if the moves for which \sigma is responsible (the odd-numbered ones if \sigma is a strategy for Player I, the even-numbered ones if it is a strategy for Player II) are played as \sigma dictates that they should be. What we need to do is ensure that for every strategy \sigma for Player I there is a sequence consistent with \sigma that belongs to A^c, and for every strategy \tau for Player II there is a sequence consistent with \tau that belongs to A.
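
In the notation of the previous section, the sequences consistent with a strategy \sigma for Player I are exactly the sequences \sigma\circ\tau as \tau ranges over strategies for Player II (and similarly with the players swapped), so what the construction must arrange is that

\forall\sigma\ \exists\tau\ (\sigma\circ\tau\notin A)\qquad\text{and}\qquad\forall\tau\ \exists\sigma\ (\sigma\circ\tau\in A).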

Suppose that we have chosen sequences for every strategy less than \sigma (in our well-ordering). Then we have chosen fewer than continuum many sequences. But there are continuum many sequences consistent with \sigma (since for every other term of the sequence there is a free choice), so amongst those sequences will be one that has not yet been chosen and that is consistent with \sigma. We are free to put that sequence into A^c or A according as \sigma is a strategy for Player I or Player II.

In short, this is a very standard use of the well-ordering principle: because there are lots of sequences consistent with any strategy, and not too many strategies, it is easy to take care of the strategies one by one and produce a game that is not determined.

Is that game really not determined?

Let’s make one more hopeless attempt at proving that all games are determined. Consider the following “strategy” for Player II: whatever strategy \sigma Player I decides on, choose a sequence \mathbf{n} consistent with \sigma and not in A, and ensure that \mathbf{n} is the sequence produced.

The trouble with that idea is that it is not a strategy. What Player II does depends not just on the sequence so far, but on Player I’s strategy, and that is not allowed. This is another way of saying that if you put all your cards on the table right at the beginning, then you put yourself at a big disadvantage in this game.

What are Borel sets like?

I’d better start with the definition, but the main point of this subsection is to give a few examples of sets that are Borel and sets that are not Borel.

A Borel set is anything that you can obtain from the open sets and closed sets using countable intersections and countable unions. I could also mention complements, but it’s an easy exercise to show that if you adopt the above definition, then the Borel sets are closed under taking complements.

Suppose you want to prove that all Borel sets have a certain property. Then an obvious approach is to show that all open sets have that property, all closed sets have that property, and the property is closed under countable intersections and countable unions. For example, one can show in that way that the Borel sets in \mathbb{R} are Lebesgue measurable.

When you meet Borel sets for the first time, it is tempting to think that every Borel set is either an open set, or a countable intersection of open sets, or a countable union of countable intersections of open sets, or … etc. … or the same but starting with closed sets. However, these by no means exhaust all the Borel sets. Let’s say that a Borel set has index k if you can get it from the open or closed sets by at most k alternations of countable intersections and unions. For example, a countable intersection of countable unions of closed sets would have index 2 (or rather, index at most 2). Now suppose that for each k we have a Borel set A_k of index k. Then the union \bigcup_kA_k is a Borel set and there is no reason to suppose that it has index r for any r. If you are used to ordinals, you will immediately want to say that this union has index \omega and that the process of generating Borel sets can be continued transfinitely. And indeed it can be shown (and the proof is not very hard — it is not unreasonable to try to prove it for yourself, though it is also not embarrassing to fail) that for every countable ordinal \alpha there is a Borel set of index \alpha that is not of index \beta for any \beta<\alpha.
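
For anyone who knows the standard notation for the Borel hierarchy (it also appears in some of the comments below), the index defined above corresponds, up to a small shift in the numbering, to the level of a set in that hierarchy, which is defined by transfinite recursion:

\mathbf{\Sigma}^0_1=\{\text{open sets}\},\qquad\mathbf{\Pi}^0_\alpha=\{A^c:A\in\mathbf{\Sigma}^0_\alpha\},\qquad\mathbf{\Sigma}^0_\alpha=\Big\{\bigcup_kA_k:\text{each }A_k\in\mathbf{\Pi}^0_{\beta_k}\text{ for some }\beta_k<\alpha\Big\}\ \ (\alpha>1).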

Here’s an example of a Borel set of sequences. Define A to be the set of sequences (n_1,n_2,\dots) such that n_k\to\infty. If we write out this definition in full, we see that (n_1,n_2,\dots)\in A if and only if

\forall M\ \exists N\ \forall k\geq N\ n_k\geq M

Therefore,

A=\bigcap_M\ \bigcup_N\ \bigcap_{k\geq N}\ \{\mathbf{n}:n_k\geq M\}

The sets U_{k,M}=\{\mathbf{n}:n_k\geq M\} are open, since if a sequence belongs to U_{k,M}, then so does any other sequence that agrees with it in the first k entries. So what we have here is a Borel set of index at most 3. In fact, it has index 2, since the sets U_{k,M} are closed as well as open. (To see this, note that the complement of U_{k,M} is the set of all \mathbf{n} such that n_k<M, which is just as clearly open.)

Here’s another example: the set of sequences for which at least one positive integer is repeated infinitely many times. The condition for belonging to this set is

\exists n\ \forall N\ \exists k\geq N\ n_k=n

This again translates into a countable union of countable intersections of countable unions of basic open sets.
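
Written out in the same way as the previous example, the set in question is

\bigcup_n\ \bigcap_N\ \bigcup_{k\geq N}\ \{\mathbf{n}:n_k=n\}.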

I don’t have any good examples of Borel sets of large finite index, or of Borel sets of infinite index. It’s not hard to construct them, but it is less easy to imagine them coming up naturally. A rough reason for this is that a natural definition of a Borel set of index k requires k alternations of quantifiers, so a Borel set of index \omega, for example, would then require us to have defined natural sets with arbitrarily long alternations of quantifiers.

Just in case anyone objects to what I’ve just said, let me mention a quick way of constructing Borel sets of arbitrarily high index. Let \phi be a bijection between \mathbb{N} and \mathbb{N}^2. Given an infinite sequence \mathbf{n}, let W_{\mathbf{n}}=\{\phi(n_k):k\in\mathbb{N}\}. Then W_{\mathbf{n}} defines a relation on \mathbb{N}. If we choose our sequence carefully, this relation may turn out to be a well-ordering. If so, then it is a well-ordering of a countable set, and therefore order-isomorphic to some countable ordinal. If I remember correctly, the set A_\alpha of sequences \mathbf{n} such that W_{\mathbf{n}} gives rise to a well-ordering of order type less than \alpha turns out to be a Borel set of index \alpha. (If not, then a very similar construction works.)

Given the difficulty in coming up with natural examples of Borel sets of anything more than very small index, it may seem a bit surprising that there are extremely simple and natural definitions of sets that are not Borel sets at all. Here’s one I particularly like. Let \phi be a bijection between \mathbb{N} and the set \mathbb{N}^{(2)} of all subsets of \mathbb{N} of size 2. Given a sequence \mathbf{n}, let G_{\mathbf{n}} be the infinite graph that includes the edge \phi(k) if and only if n_k is odd. Now let A be the set of all sequences \mathbf{n} such that the graph G_{\mathbf{n}} contains an infinite complete subgraph. Then A is not Borel.

Why is A not Borel? I won’t prove that here, but I can at least explain why it isn’t obviously Borel. To do that, let me talk about infinite graphs rather than about their encodings as sequences. How do we define the set of graphs that contain an infinite clique? We say that G contains an infinite clique if and only if there exists an infinite set of vertices X such that for any two distinct m,n\in X the edge mn belongs to G. The huge difference between this and the definitions given earlier is that we are quantifying over all infinite subsets of \mathbb{N}. In other words, this definition is naturally a second-order definition (roughly, one that quantifies over subsets of \mathbb{N}) rather than a first-order definition (roughly, one that quantifies over elements).
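
In symbols, the definition just given is

G\text{ contains an infinite clique}\iff\exists X\subseteq\mathbb{N}\ \big(X\text{ is infinite and }\forall m,n\in X\ (m\neq n\Rightarrow mn\in G)\big),

and it is the quantifier \exists X, which ranges over subsets of \mathbb{N} rather than over elements, that makes this a second-order definition.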

Of course, it isn’t obvious that a second-order definition isn’t equivalent to some other first-order definition, and indeed for some quite natural second-order definitions that is the case. But by and large one doesn’t expect it. However, there is certainly a sense in which the set above — of graphs that contain an infinite clique — is not particularly complicated: it has a simple and concise definition. It is in fact an example of an analytic set. These can be defined in various equivalent ways, one of which is as projections of Borel sets. The idea is as follows. First, we take a Borel subset B of \mathbb{N}^{\mathbb{N}}\times\mathbb{N}^{\mathbb{N}} (which is homeomorphic to \mathbb{N}^{\mathbb{N}}). The set we take is a set of pairs (\mathbf{m},\mathbf{n}), where \mathbf{m} encodes an infinite subset of \mathbb{N} (in any reasonable way — for example, it could be the set of all k such that m_k=1), and \mathbf{n} encodes a graph with \mathbb{N} as vertex set. The set of pairs we take is the set of all pairs (\mathbf{m},\mathbf{n}) such that the infinite set encoded by \mathbf{m} is the vertex set of a clique in the graph encoded by \mathbf{n}. If we forget the encodings, we are taking pairs (X,G) such that X is infinite and is the vertex set of a clique in G.

The projection of this set B on to the second coordinate gives us the set A of all G such that (X,G) belongs to B for at least one X. In other words, it is precisely the set of graphs that contain an infinite clique.
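
In symbols, with B as above,

A=\{G:(X,G)\in B\text{ for some }X\},

so A is exhibited as the projection of a Borel set, which is what makes it analytic even though (as mentioned above) it is not Borel.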

Borel determinacy

Martin’s theorem is, as its name suggests, the statement that all Borel games are determined. Here are a few reasons that the theorem is not trivial.

1. In order to prove the theorem for Borel sets of index \alpha, Martin iterates the power-set operation on \mathbb{N} \alpha times. In other words, he uses extremely large cardinals. (Here I do not mean large cardinals in the usual sense — by the standards of set theorists these are tiny cardinals, but by the standards of most mathematicians, even the power set of the power set of the power set of the power set of \mathbb{N} is pretty huge.)

2. The above feature of the proof was not some piece of laziness on Martin’s part: Harvey Friedman had earlier shown that such iterations of the power-set operation are necessary. I think what this means is roughly as follows. In order to iterate the power set operation many times, you have to use the axiom of replacement many times. Of course, at a successor ordinal, all you need to do is apply the power-set axiom to the previous set, but at the limit stage it is less straightforward. In one way it is easy: you just take the union of all the sets so far. But in order to define this union, you have to take the set of all sets so far, and for that you need to use the axiom of replacement. So my guess is that if you don’t allow yourself the axiom of replacement, you can deduce the existence of the \alpha-times iterated power set of \mathbb{N} from the determinacy of Borel games of index \alpha. But I haven’t looked into this properly, so my guess may be wrong. [Added later: it is indeed wrong — see the helpful discussion below that starts with this comment.]

In any case, there is some precise sense in which proving the determinacy of Borel games requires you to iterate the power-set operation uncountably many times (by which I mean countably many times for each index, but there are uncountably many possible indices).

3. The axiom of choice, as already noted, implies the existence of non-determined games. Since those games are produced in a highly non-constructive way, one might expect that all “nice” sets can be shown to be determined. The need for largish cardinals to prove Borel determinacy certainly dents one’s confidence in this, but once one goes further and considers analytic sets, and more general sets (I’m referring here to the projective sets, which are like the analytic sets except that now you can have an alternation of several second-order quantifiers), it turns out that whether or not they are determined depends on what axioms you are prepared to accept. For instance, to prove that analytic sets are determined, it is helpful if you can assume the existence of a measurable cardinal, which would count as a “large cardinal” in the set-theorist’s sense (though fairly modest as large cardinals go). It is not necessary to use a measurable cardinal (something called zero sharp does the job), but it becomes necessary if you go slightly further up the projective hierarchy.

4. I mentioned earlier that a standard way of proving that all Borel sets have a certain property is to show that open sets and closed sets have it, and that it is closed under countable intersections and unions. Unfortunately, as is easily seen, determined sets are not closed under even pairwise intersections. All you have to do is take a non-determined set A and let A_1=A\cup\{\mathbf{n}:n_1=1\} and A_2=A\cup\{\mathbf{n}:n_1\ne 1\}. Then A_1 and A_2 are obviously determined (Player I wins A_1 by choosing n_1=1, and wins A_2 by choosing any other value of n_1), and their intersection is A, as the short calculation below confirms.
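
To check the last claim:

A_1\cap A_2=\big(A\cup\{\mathbf{n}:n_1=1\}\big)\cap\big(A\cup\{\mathbf{n}:n_1\ne 1\}\big)=A\cup\big(\{\mathbf{n}:n_1=1\}\cap\{\mathbf{n}:n_1\ne 1\}\big)=A,

since no sequence has a first term that is both equal to 1 and not equal to 1.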

Of these difficulties, the fourth is perhaps the least serious. If you want to prove an inductive statement and the obvious inductive hypothesis “doesn’t induct”, that does not mean that the situation is hopeless. It does, however, mean that it would be good to find a stronger hypothesis that implies what you want to prove and “does induct”.

That is what Martin does. However, this post has got quite long, so I think I’ll stop it here and move on to Martin’s proof in the next post.

21 Responses to “Determinacy of Borel games I”

  1. Bill Says:

    I’ve only read about half so far, but here are some comments:

    The terminology you are using is a bit confusing. Since the game is (or can be) infinite and can be non-determined, it is better to start by calling ‘winning strategy’ a ‘non-losing strategy’. Then when you talk about open games, it makes more sense to say that if Player II has a non-losing strategy then it is automatically a winning strategy, so the game is determined. It is not clear what it means that Player I does not have a winning strategy.

    Some corrections:

    (iii) If not all pairs are legal moves, then k is even and the only exception is the pair (n_{k-1},n_{k}).

    Moreover, if Player I wins, then there will be some stage at which at least n_2 1s have been chosen, and after that it doesn’t make any difference what anyone plays. Note, however, that Player II can postpone this moment for as long as he likes, by choosing n_2 sufficiently large.

    If A is an open set and \mathbf{n} is an infinite sequence that belongs to A, then there must be a finite sequence s such that \mathbf{n}\in X_s and X_s\subset A.

    Let us call a sequence \mathbf{n} consistent with a strategy \sigma …

    • Bill Says:

      Sorry, my first comment probably does not make sense. If Player I does not have a winning strategy, it means that for all \sigma there exists a sequence n in the complement of A that agrees with \sigma on even-step moves. I guess I just don’t see how this implies that Player II has a winning strategy when A is open, but I will think about this more later.

  2. andrescaicedo Says:

    Great to see you will be discussing this argument! (And nice to hear that you found Shehzad’s note useful; he is my Master’s student.)

  3. porton Says:

    I have not read very carefully, but it seems I’ve caught the main idea (which was probably not said quite explicitly): Every strategy can be beaten.

  4. Monroe Says:

    Regarding the necessity of the Replacement axiom, it is not the case that Borel determinacy proves that the powerset can be iterated $\alpha$ times for every countable ordinal $\alpha$. This is because Borel determinacy is a statement that is local to $V_{\omega+\omega}$, since all reals and strategies and sets of reals live there. So if Borel determinacy is true in the whole universe, then it is true there. We can even find local surrogates for the countable ordinals there by looking at equivalence classes of well-orderings of the natural numbers. But of course $V_{\omega+\omega}$ fails Replacement pretty badly, while satisfying all the other axioms of ZFC. This is an interesting instance of axioms about high-rank sets proving things about low-rank sets.

    • andrescaicedo Says:

      (This is a bit technical, apologies in advance. Hope not many typos survived.)

      To clarify, the optimal version of the results on iterated power sets, following Martin’s unpublished book on determinacy (and, where relevant, improving the results from Friedman’s 1971 paper “Higher Set Theory and Mathematical Practice”), is roughly as follows: (“Roughly”, since I do not want to state the intricacies in the notion of tree that Martin uses. The case T=\mathbb N^{<\mathbb N} suffices for most of what follows.)

      Let \mathsf{ZC}^- be the theory \mathsf{ZFC}, without replacement, or the power set axiom. Working in the theory \mathsf{ZC}^-+\Sigma_1-replacement, if T is a tree, n\in\mathbb N, and \mathcal P^n(T) exists, then all \mathbf\Delta^0_{n+4}(T) games are determined; if \alpha is an infinite countable ordinal, and \mathcal P^\alpha(T) exists, then all \mathbf\Delta^0_{\alpha+3}(T) games are determined.

      This is optimal, in the following sense: For each ordinal \alpha<\omega_1, let \alpha^* be \alpha+4 if \alpha is finite, and \alpha+3 if it is infinite. We then have that there is a transitive model M of \mathsf{ZFC} without the power set axiom, such that \mathcal P^\alpha(\omega)\in M, and \mathbf\Sigma^0_{\alpha^*}-determinacy for countable trees fails.

      (Note that the background theory may be too weak to show that the iterated power sets under discussion actually exist.)

      The converse is not a direct implication, but a consistency result. A problem with stating it is that in order to discuss properties of an ordinal \alpha we need a way of referring to it. Martin addresses this by assuming \alpha is recursive. (Replacing the background theory appropriately, we can circumvent this formality.) Work in \mathsf{ZC}^-+\Sigma_1-replacement. For \alpha recursive, we have that if all \Sigma^0_{\alpha^*} games on \mathbb N^{<\mathbb N} are determined, then there is a (least) ordinal \beta_\alpha such that L_{\beta_\alpha} is a model of \mathsf{ZFC} without the power set axiom, plus "\mathcal P^\alpha(\omega) exists".

      Note that the theory Martin works in is stronger than Zermelo set theory (in view of Monroe's comments, this is unavoidable). His assumption of \Sigma_1-replacement allows us to discuss ordinals (as long as they appear as ranks of countable trees, but this certainly takes us way beyond \omega+\omega); as Martin puts it, "The point of \Sigma_1-replacement is that it gives us cartesian products, enough ordinal numbers, and some simple definitions by transfinite recursion." In Friedman's paper, a similar extension of Zermelo set theory is considered as well.

      (Recent additional results have been obtained in the realm of second order arithmetic, by Montalbán and Shore.)

    • gowers Says:

      Many thanks (to both of you) for explaining this to me.

  5. finelli Says:

    small typo:
    “Note, however, that Player II can postpone this moment for as long as he likes, by choosing n_1 sufficiently large.”
    should read:
    “Note, however, that Player II can postpone this moment for as long as he likes, by choosing n_2 sufficiently large.”

    Thanks — corrected.

    • Anonymous Says:

      There’s another similar typo in the same paragraph (an n_1 that should be an n_2).

    • Dömötör Pálvölgyi Says:

      I think Anon meant that in the sentence “Moreover, if Player I wins, then there will be some stage at which at least n_1 1s have been chosen, and after that it doesn’t make any difference what anyone plays.” n_1 should be n_2.

      Thanks. I’ve corrected that too now.

  6. E.L. Wisty Says:

    Reblogged this on Pink Iguana.

  7. william e emba Says:

    A very minor technical correction. The existence of 0# only proves the determinacy of lightface analytic games. Lightface analytic sets may be thought of as the projection of a computable Borel set. If you replace 0# with x#, any real x, and replace computable with computable using the x-membership oracle, you get the lightface relative to x analytic sets. Each analytic set is lightface analytic relative to some real.

    A different description involves using a second order existential quantifier followed by some first order quantifiers followed by a matrix to describe any analytic set. (“Matrix” is just the term for the part of a formula after the quantifiers, for formulas written with a single block of quantifiers.) If the matrix refers to explicit reals (known as the “parameters”) then the analytic set is lightface analytic relative to its parameters. As usual, Cantor allows us to assume there is a single parameter.

  8. Anonymous Says:

    Why is k even in iii) under the What is a game? section. Doesn’t the (2k-1, 2k) take care of the requirement that the illegal move be from the second player?

    Thanks — that was a slip, which I have now corrected (by turning (2k-1,2k) into (k-1,k)).

  9. Anonymous Says:

    In the 3rd paragraph below “Open game”, should ” a finite sequence s such that x\in X_s and X_s\subset A” be “a finite sequence s such that n\in X_s and X_s\subset A” ?

    Yes. Thanks — I’ve changed it now.

  10. Real Numbers and Infinite Games, Part I | Matt Baker's Math Blog Says:

    […] theorem of Donald Martin asserts that the game is determined whenever is a Borel set (see this post by Tim Gowers for a detailed discussion of Martin’s theorem).  The Axiom of Determinacy (AD) […]

  11. John Baez Says:

    “However, these by no manes exhaust all the Borel sets.”

    Thanks — corrected now.

  12. Winning Strategies (and Sylver Coinage) – Catbus Says:

    […] outside the scope of this article, it is definitely worth looking at.  Tim Gowers has a series of posts discussing this in detail. It is also possible to study the Axiom of Determinacy and its bizarre […]

  13. David Mumford Says:

    Thanks Tim, I was struggling reading Martin's proof and it never became clear until I read your remark: the goal of \phi(strategy for top game) is NOT to try to win but to guarantee that the play in the bottom game lifts to the top game in spite of having no control over your adversary's moves. Then the scales fell from my eyes!

    David (Mumford)

  14. Prerequisites for understanding Borel determinacy – Math Solution Says:

    […] five-part series of posts discussing the proof, and motivating how one may go about discovering it: 1, 2, 3, 4, […]

  15. Covering game in Borel Determinacy proof – Math Solution Says:

    […] it should be worth looking at Gowers’ account of the proof at his blog; the elucidation of the role of φ (called ψ there) takes place in the third post of the […]
